
    Sorting photons by radial quantum number

    The Laguerre-Gaussian (LG) modes constitute a complete basis set for representing the transverse structure of a paraxial photon field in free space. Earlier workers have shown how to construct a device for sorting a photon according to its azimuthal LG mode index, which describes the orbital angular momentum (OAM) carried by the field. In this paper we propose and demonstrate a mode sorter based on the fractional Fourier transform (FRFT) to efficiently decompose the optical field according to its radial profile. We experimentally characterize the performance of our implementation by separating individual radial modes as well as superposition states. The reported scheme can, in principle, achieve unit efficiency and thus can be suitable for applications that involve quantum states of light. This approach can be readily combined with existing OAM mode sorters to provide a complete characterization of the transverse profile of the optical field.
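
    For background only (not reproduced from the abstract): a standard form of the LG basis at the beam waist and the FRFT eigenvalue relation that underlies this kind of radial-mode sorting; phase and normalization conventions vary between texts.

```latex
% Laguerre-Gaussian mode at the beam waist (radial index p, azimuthal index \ell);
% normalization omitted.
\[
  \mathrm{LG}_{p,\ell}(r,\phi) \;\propto\;
  \left(\frac{\sqrt{2}\,r}{w}\right)^{|\ell|}
  L_p^{|\ell|}\!\left(\frac{2r^2}{w^2}\right)
  e^{-r^2/w^2}\, e^{i\ell\phi}
\]
% LG modes are eigenfunctions of the fractional Fourier transform of angle \alpha,
% with an eigenphase fixed by the mode indices, so modes of different radial index p
% (at fixed \ell) accumulate distinct, sortable phases:
\[
  \mathcal{F}_\alpha\!\left[\mathrm{LG}_{p,\ell}\right]
  \;=\; e^{-i\alpha\,(2p + |\ell| + 1)}\;\mathrm{LG}_{p,\ell}
\]
```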

    MotionBEV: Attention-Aware Online LiDAR Moving Object Segmentation with Bird's Eye View based Appearance and Motion Features

    Identifying moving objects is an essential capability for autonomous systems, as it provides critical information for pose estimation, navigation, collision avoidance, and static map construction. In this paper, we present MotionBEV, a fast and accurate framework for LiDAR moving object segmentation, which segments moving objects with appearance and motion features in the bird's eye view (BEV) domain. Our approach converts 3D LiDAR scans into a 2D polar BEV representation to improve computational efficiency. Specifically, we learn appearance features with a simplified PointNet and compute motion features through the height differences of consecutive frames of point clouds projected onto vertical columns in the polar BEV coordinate system. We employ a dual-branch network bridged by the Appearance-Motion Co-attention Module (AMCM) to adaptively fuse the spatio-temporal information from appearance and motion features. Our approach achieves state-of-the-art performance on the SemanticKITTI-MOS benchmark. Furthermore, to demonstrate the practical effectiveness of our method, we provide a LiDAR-MOS dataset recorded by a solid-state LiDAR, which features non-repetitive scanning patterns and a small field of view.
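
    As a rough illustration of the polar BEV projection and height-difference motion cue described above, the following minimal NumPy sketch builds a polar height image from a point cloud and differences consecutive frames. The grid sizes, function names, and max-height aggregation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def polar_bev_height(points, r_max=50.0, n_r=128, n_theta=128):
    """Project an (N, 3) point cloud onto a polar BEV grid and keep the
    maximum height per cell. Grid resolution is illustrative only."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)                                   # angle in (-pi, pi]
    r_idx = np.clip((r / r_max * n_r).astype(int), 0, n_r - 1)
    t_idx = np.clip(((theta + np.pi) / (2 * np.pi) * n_theta).astype(int),
                    0, n_theta - 1)
    bev = np.full((n_r, n_theta), -np.inf)
    np.maximum.at(bev, (r_idx, t_idx), z)                      # per-cell max height
    bev[np.isinf(bev)] = 0.0                                   # empty cells -> 0
    return bev

def motion_feature(scan_t, scan_tm1, **grid_kwargs):
    """Crude motion cue: height difference between the BEV images of two
    consecutive scans, in the spirit of (not identical to) the paper's features."""
    return polar_bev_height(scan_t, **grid_kwargs) - polar_bev_height(scan_tm1, **grid_kwargs)
```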

    High-dimensional quantum key distribution based on mutually partially unbiased bases

    We propose a practical high-dimensional quantum key distribution protocol based on mutually partially unbiased bases utilizing transverse modes of light. In contrast to conventional protocols using mutually unbiased bases, our protocol uses Laguerre-Gaussian and Hermite-Gaussian modes of the same mode order as two mutually partially unbiased bases for encoding, which leads to a scheme free from mode-dependent diffraction in long-distance channels. Since only linear and passive optical elements are needed, our experimental implementation significantly simplifies qudit generation and state measurement. Because this protocol differs from conventional protocols based on mutually unbiased bases, we provide a security analysis of our protocol.
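
    For context only (textbook material, not taken from the abstract): LG and HG modes of the same total order are related by a fixed unitary and share the same Gouy phase, which is why encoding within a single mode order avoids mode-dependent diffraction. For order N = 1, with a common sign convention:

```latex
% First-order (N = 1) relation between LG and HG modes; sign conventions vary.
\[
  \mathrm{LG}_{0,\pm 1}(x,y,z)
  \;=\; \frac{1}{\sqrt{2}}\left[\mathrm{HG}_{1,0}(x,y,z) \pm i\,\mathrm{HG}_{0,1}(x,y,z)\right]
\]
% All modes of order N accumulate the same Gouy phase, so free-space propagation
% does not distinguish them:
\[
  n + m \;=\; 2p + |\ell| \;=\; N
  \;\Longrightarrow\;
  \text{Gouy phase } (N+1)\,\zeta(z)\ \text{for both } \mathrm{HG}_{n,m}\ \text{and}\ \mathrm{LG}_{p,\ell}
\]
```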

    In-Domain GAN Inversion for Faithful Reconstruction and Editability

    Generative Adversarial Networks (GANs) have significantly advanced image synthesis by mapping randomly sampled latent codes to high-fidelity synthesized images. However, applying well-trained GANs to real image editing remains challenging. A common solution is to find an approximate latent code that can adequately recover the input image to edit, which is also known as GAN inversion. To invert a GAN model, prior works typically focus on reconstructing the target image at the pixel level, yet few studies examine whether the inverted result can also support manipulation at the semantic level. This work fills this gap by proposing in-domain GAN inversion, which consists of a domain-guided encoder and a domain-regularized optimizer, to regularize the inverted code in the native latent space of the pre-trained GAN model. In this way, we manage to sufficiently reuse the knowledge learned by GANs for image reconstruction, facilitating a wide range of editing applications without any retraining. We further perform comprehensive analyses of the effects of the encoder structure, the starting inversion point, and the inversion parameter space, and observe a trade-off between reconstruction quality and editing property. Such a trade-off sheds light on how a GAN model represents an image with various semantics encoded in the learned latent distribution. Code, models, and demo are available at the project page: https://genforce.github.io/idinvert/
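
    A minimal sketch of the kind of domain-regularized optimization the abstract describes, assuming a pre-trained generator G, a domain-guided encoder E, and some perceptual feature extractor percept. The loss weights, step counts, and exact terms here are placeholders rather than the authors' settings; the full recipe is in the linked code.

```python
import torch

def domain_regularized_inversion(x, G, E, percept, steps=200, lr=0.01,
                                 lam_feat=5e-5, lam_dom=2.0):
    """Start from the encoder's code E(x) and refine it so that G(z) matches x
    while E(G(z)) stays close to z, keeping the code 'in domain'. G, E, percept,
    and all hyperparameters are illustrative placeholders."""
    z = E(x).detach().clone().requires_grad_(True)   # encoder output as starting point
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_rec = G(z)
        loss = torch.mean((x_rec - x) ** 2)                                   # pixel reconstruction
        loss = loss + lam_feat * torch.mean((percept(x_rec) - percept(x)) ** 2)  # perceptual term
        loss = loss + lam_dom * torch.mean((E(x_rec) - z) ** 2)               # domain regularizer
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```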

    REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos

    Reconstructing dynamic 3D garment surfaces with open boundaries from monocular videos is an important problem, as it provides a practical and low-cost solution for clothes digitization. Recent neural rendering methods achieve high-quality dynamic clothed human reconstruction from monocular video, but they cannot separate the garment surface from the body. Moreover, although existing garment reconstruction methods based on feature-curve representations demonstrate impressive results for garment reconstruction from a single image, they struggle to generate temporally consistent surfaces for video input. To address the above limitations, in this paper we formulate this task as an optimization problem of 3D garment feature curves and surface reconstruction from monocular video. We introduce a novel approach, called REC-MV, to jointly optimize the explicit feature curves and the implicit signed distance field (SDF) of the garments. The open garment meshes can then be extracted via garment template registration in the canonical space. Experiments on multiple casually captured datasets show that our approach outperforms existing methods and can produce high-quality dynamic garment surfaces. The source code is available at https://github.com/GAP-LAB-CUHK-SZ/REC-MV. Comment: CVPR 2023; project page: https://lingtengqiu.github.io/2023/REC-MV
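
    A very loose sketch of what jointly optimizing explicit feature curves with an implicit SDF can look like: curve points and surface samples are both pushed onto the zero level set of the SDF network. The function, loss terms, and weight are illustrative assumptions; the actual REC-MV objective includes rendering and registration terms not shown here.

```python
import torch

def joint_curve_sdf_step(sdf_net, curve_pts, surface_pts, opt, lam_curve=1.0):
    """One illustrative step coupling explicit feature-curve points with an
    implicit SDF. Assumes `opt` already contains sdf_net parameters and
    curve_pts (a learnable (K, 3) tensor); surface_pts is an (M, 3) batch of
    points believed to lie on the garment surface."""
    surf_loss = sdf_net(surface_pts).abs().mean()   # surface samples -> zero SDF
    curve_loss = sdf_net(curve_pts).abs().mean()    # feature curves should lie on the surface
    loss = surf_loss + lam_curve * curve_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```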